The task of building general agents that perform well across a wide range of tasks has been an important goal of reinforcement learning since its inception. This problem has been the subject of a large body of work, with performance frequently measured by observing scores over the wide range of environments contained in the Atari 57 benchmark. Agent57 was the first agent to surpass the human benchmark on all 57 games, but this came at the cost of poor data efficiency, requiring nearly 80 billion frames of experience. Taking Agent57 as a starting point, we employ a diverse set of strategies to achieve a 200-fold reduction in the experience needed to outperform the human baseline. We investigate a set of instabilities and bottlenecks encountered when reducing the data regime, and propose effective solutions to build a more robust and efficient agent. We also demonstrate performance competitive with high-performing methods such as Muesli and MuZero. The four key components of our approach are (1) an approximate trust-region method that enables stable bootstrapping from the online network, (2) a normalization scheme for the loss and priorities that improves robustness when learning a set of value functions with a wide range of scales, (3) an improved architecture employing techniques from NFNets to leverage deeper networks without normalization layers, and (4) a policy distillation method that smooths the instantaneous greedy policy over time.
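A minimal sketch of what a normalization scheme in the spirit of component (2) might look like, assuming a running per-head scale estimate; the class name, decay constant, and update rule are illustrative and not the paper's exact scheme:

```python
import numpy as np

class TDErrorNormalizer:
    """Hypothetical sketch of per-value-function TD-error normalization
    (not the authors' exact scheme): each value head's TD errors are
    divided by a running scale estimate, so heads with very different
    return magnitudes contribute comparably to the loss and to replay
    priorities."""

    def __init__(self, num_heads, decay=0.99, eps=1e-3):
        self.scale = np.ones(num_heads)  # running estimate of |TD error| per head
        self.decay = decay
        self.eps = eps                   # floor to avoid division by ~0

    def normalize(self, td_errors):
        # td_errors: array of shape (batch, num_heads)
        batch_scale = np.abs(td_errors).mean(axis=0)
        self.scale = self.decay * self.scale + (1 - self.decay) * batch_scale
        return td_errors / np.maximum(self.scale, self.eps)

# Usage: normalized errors feed both the loss and the replay priorities.
norm = TDErrorNormalizer(num_heads=4)
td = np.random.randn(32, 4) * np.array([1.0, 10.0, 100.0, 0.1])
td_n = norm.normalize(td)
loss = 0.5 * (td_n ** 2).mean()
priorities = np.abs(td_n).max(axis=1)  # one priority per transition
```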
Early detection of relevant locations in a piece of news is especially important in extreme events such as environmental disasters, war conflicts, disease outbreaks, or political turmoil. Additionally, this detection also helps recommender systems to promote relevant news based on user locations. Note that when the relevant locations are not mentioned explicitly in the text, state-of-the-art methods typically fail to recognize them because these methods rely on syntactic recognition. In contrast, by incorporating a knowledge base and connecting entities with their locations, our system successfully infers the relevant locations even when they are not mentioned explicitly in the text. To evaluate the effectiveness of our approach, and due to the lack of datasets in this area, we also contribute to the research community with a gold-standard multilingual news-location dataset, NewsLOC. It contains annotations of the relevant locations (and their WikiData IDs) of 600+ Wikinews articles in five different languages: English, French, German, Italian, and Spanish. Through experimental evaluations, we show that our proposed system outperforms the baselines, and that fine-tuning the model with semi-supervised data further increases the classification rate. The source code and the NewsLOC dataset are publicly available to the research community at https://github.com/vsuarezpaniagua/NewsLocation.
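As a minimal illustration of this inference step (with a toy dictionary standing in for WikiData, and entity recognition/linking assumed to have already happened), implicit locations can be recovered by voting over the locations of the linked entities:

```python
# Toy sketch of knowledge-base location inference: entities recognized in
# the text are mapped to their associated locations, so an article that
# never names a city can still be resolved to one.
from collections import Counter

ENTITY_TO_LOCATION = {             # toy stand-in for WikiData lookups
    "Eiffel Tower": ("Paris", "Q90"),
    "Bundestag": ("Berlin", "Q64"),
    "La Scala": ("Milan", "Q490"),
}

def infer_locations(entities):
    """Vote over the locations of all linked entities in the article."""
    votes = Counter(ENTITY_TO_LOCATION[e] for e in entities if e in ENTITY_TO_LOCATION)
    return votes.most_common()

# An article mentioning only "Eiffel Tower" is still resolved to Paris/Q90.
print(infer_locations(["Eiffel Tower", "some person"]))
```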
Experience management is an emerging business area where organizations focus on understanding the feedback of customers and employees in order to improve their end-to-end experiences. This results in a unique set of machine learning problems to help understand how people feel, discover issues they care about, and find which actions need to be taken on data that are different in content and distribution from traditional NLP domains. In this paper, we present a case study of building text analysis applications that perform multiple classification tasks efficiently in 12 languages in the nascent business area of experience management. In order to scale up modern ML methods on experience data, we leverage cross-lingual and multi-task modeling techniques to consolidate our models into a single deployment to avoid overhead. We also make use of model compression and model distillation to reduce overall inference latency and hardware cost to a level acceptable for business needs while maintaining model prediction quality. Our findings show that multi-task modeling improves task performance for a subset of experience management tasks in both XLM-R and mBERT architectures. Among the compressed architectures we explored, we found that MiniLM achieved the best compression/performance tradeoff. Our case study demonstrates a speedup of up to 15.61x with 2.60% average task degradation (or a 3.29x speedup with 1.71% degradation) and estimated savings of 44% over using the original full-size model. These results demonstrate a successful scaling up of text classification for the challenging new area of ML for experience management.
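A minimal sketch of the consolidation idea above (not the production system): one shared multilingual encoder with a lightweight head per classification task, so a single deployment serves all tasks. The toy encoder, task names, and dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MultiTaskClassifier(nn.Module):
    """Shared encoder + per-task heads: one deployed model, many tasks."""

    def __init__(self, encoder, hidden_dim, task_num_classes):
        super().__init__()
        self.encoder = encoder  # e.g. an XLM-R / mBERT / MiniLM backbone
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden_dim, n) for task, n in task_num_classes.items()}
        )

    def forward(self, features, task):
        pooled = self.encoder(features)   # (batch, hidden_dim)
        return self.heads[task](pooled)   # task-specific logits

# Toy stand-in encoder so the sketch runs end-to-end.
encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU())
model = MultiTaskClassifier(encoder, 256, {"sentiment": 3, "topic": 12})
logits = model(torch.randn(4, 128), task="sentiment")  # -> shape (4, 3)
```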
Only increasing accuracy without considering uncertainty may negatively impact Deep Neural Network (DNN) decision-making and decrease its reliability. This paper proposes five combined preprocessing and post-processing methods for time-series binary classification problems that simultaneously increase the accuracy and reliability of DNN outputs, applied to a 5G UAV security dataset. These techniques use DNN outputs as input parameters and process them in different ways. Two methods use a well-known Machine Learning (ML) algorithm as a complement, and the other three use only the confidence values that the DNN estimates. We compare seven different metrics, namely the Expected Calibration Error (ECE), Maximum Calibration Error (MCE), Mean Confidence (MC), Mean Accuracy (MA), Normalized Negative Log Likelihood (NLL), Brier Score Loss (BSL), and Reliability Score (RS), and the tradeoffs between them to evaluate the proposed hybrid algorithms. First, we show that the eXtreme Gradient Boosting (XGB) classifier might not be reliable for binary classification under the conditions this work presents. Second, we demonstrate that at least one of the proposed methods can achieve better results than classification in the DNN softmax layer. Finally, we show that the proposed methods may improve accuracy and reliability with better uncertainty calibration, based on the assumption that the RS captures the difference between the MC and MA metrics, and that this difference should be zero to increase reliability. For example, Method 3 presents the best RS of 0.65, even when compared to the XGB classifier, which achieves an RS of 7.22.
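For reference, ECE and the MC/MA gap behind RS can be computed as follows. This sketch assumes RS is the plain absolute MC-MA gap; the paper's reported values (0.65, 7.22) suggest a different scaling, which is not specified here:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions by confidence and average the
    per-bin |accuracy - confidence| gap, weighted by bin size."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

def reliability_score(confidences, correct):
    """Assumed RS: the gap between Mean Confidence (MC) and Mean
    Accuracy (MA); zero means the two are perfectly matched."""
    return abs(confidences.mean() - correct.mean())

conf = np.array([0.9, 0.8, 0.7, 0.95])
hit = np.array([1, 1, 0, 1])   # 1 if the prediction was correct
print(expected_calibration_error(conf, hit), reliability_score(conf, hit))
```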
Importance sampling (IS) is a powerful Monte Carlo (MC) method for approximating integrals, for instance in the context of Bayesian inference. In IS, samples are simulated from a so-called proposal distribution, and the choice of this proposal is key to achieving high performance. In adaptive IS (AIS) methods, a set of proposals is iteratively improved. AIS is a relevant and timely methodology, although many limitations remain to be overcome, such as the curse of dimensionality in high-dimensional and multimodal problems. Moreover, the Hamiltonian Monte Carlo (HMC) algorithm has become increasingly popular in machine learning and statistics. HMC has several appealing features, such as its exploratory behavior, especially in high-dimensional targets where other methods suffer. In this paper, we introduce the novel Hamiltonian adaptive importance sampling (HAIS) method. HAIS implements a two-step adaptive process using parallel HMC chains that cooperate at each iteration. The proposed HAIS efficiently adapts a population of proposals, extracting the advantages of HMC. HAIS can be understood as a particular instance of the generic layered AIS family with an additional resampling step. HAIS achieves significant performance improvements over state-of-the-art algorithms in high-dimensional problems. We discuss the statistical properties of HAIS and show its high performance on two challenging examples.
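To make the generic IS/AIS cycle concrete, here is a minimal self-normalized importance sampling loop with a simple moment-matching adaptation on a toy Gaussian target. HAIS replaces this adaptation step with parallel, cooperating HMC chains, which this sketch does not implement:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Unnormalized log-density of the target (toy 2-D Gaussian at (3, 3))."""
    return -0.5 * np.sum((x - 3.0) ** 2, axis=-1)

# Generic AIS cycle: sample from the proposal, weight, adapt, repeat.
mu, sigma = np.zeros(2), 2.0
for it in range(50):
    x = mu + sigma * rng.standard_normal((500, 2))        # proposal samples
    log_q = (-0.5 * np.sum((x - mu) ** 2, axis=-1) / sigma**2
             - 2 * np.log(sigma))                          # log N(x; mu, sigma^2 I) + const
    log_w = log_target(x) - log_q                          # importance log-weights
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                           # self-normalized weights
    mu = (w[:, None] * x).sum(axis=0)                      # adapt the proposal mean

print(mu)  # should approach the target mean (3, 3)
```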
Probabilistic models based on continuous latent spaces (such as variational autoencoders) can be understood as uncountable mixture models whose components depend continuously on the latent code. They are expressive tools for generative and probabilistic modeling, but are at odds with tractable probabilistic inference, that is, computing marginals and conditionals of the represented probability distribution. Meanwhile, tractable probabilistic models such as probabilistic circuits (PCs) can be understood as hierarchical discrete mixture models, which allows them to perform exact inference, but they often show subpar performance compared to continuous latent-space models. In this paper, we investigate a hybrid approach, namely continuous mixtures of tractable models with a small latent dimension. While these models are analytically intractable, they are well suited to numerical integration schemes based on a finite set of integration points. With a sufficient number of integration points the approximation becomes essentially exact. Moreover, with a finite set of integration points, the approximation can be compiled into a PC, performing "exact inference in an approximate model". In experiments, we show that this simple scheme proves remarkably effective, as PCs learned this way set a new state of the art for tractable models on many standard density estimation benchmarks.
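The compilation idea can be made concrete with a one-dimensional latent variable and Gauss-Hermite quadrature; the component density and the number of integration points below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.stats import norm

# A continuous mixture p(x) = ∫ p(x|z) N(z; 0, 1) dz with a 1-D latent z,
# approximated at a finite set of Gauss-Hermite integration points, which
# turns it into an ordinary finite mixture (i.e. a small PC).
def component_density(x, z):
    """p(x|z): a Gaussian whose mean depends continuously on the latent z."""
    return norm.pdf(x, loc=np.tanh(z), scale=0.5)

n_points = 16
nodes, weights = np.polynomial.hermite.hermgauss(n_points)
z_i = np.sqrt(2.0) * nodes         # integration points for N(0, 1)
w_i = weights / np.sqrt(np.pi)     # mixture weights (sum to ~1)

def p_approx(x):
    """Finite-mixture approximation of the continuous mixture at x."""
    return sum(w * component_density(x, z) for w, z in zip(w_i, z_i))

print(p_approx(0.3))   # exact inference is now a finite weighted sum
```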
Most artificial intelligence (AI) research has concentrated in high-income countries, where imaging data, IT infrastructure, and clinical expertise are plentiful. However, progress has been slower in limited-resource settings where medical imaging is needed. For example, in sub-Saharan Africa, the rate of perinatal mortality is very high due to limited access to antenatal screening. In these countries, AI models could be implemented to help clinicians acquire fetal ultrasound planes for the diagnosis of fetal abnormalities. So far, deep learning models have been proposed to identify standard fetal planes, but there is no evidence of their ability to generalize to centers with limited access to high-end ultrasound equipment and data. This work investigates different strategies to reduce the domain-shift effect for a fetal plane classification model trained at a high-resource clinical center and transferred to new low-resource centers. To this end, the classifier is first evaluated on a new center in Denmark with 1,008 patients, and then on five African centers (Egypt, Algeria, Uganda, Ghana, and Malawi) with 25 patients each. The results show that a transfer learning approach can be a solution for combining small African samples with the existing large-scale databases of developed countries. In particular, the model can be improved, raising the recall to $0.92 \pm 0.04$ while maintaining high precision. This framework shows promise for building generalizable new AI models in clinical centers with limited data acquired under challenging and heterogeneous conditions, and calls for further research to develop new solutions for the usability of AI in lower-resource countries.
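As one concrete (assumed) instance of such a strategy, a transfer-learning recipe might freeze a backbone trained at the high-resource center and re-fit only the classification head on the small new-center sample. The toy backbone, six plane classes, and schedule below are placeholders, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Stand-in backbone; in practice, a network pretrained on the large
# high-resource dataset would be loaded here.
backbone = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(8, 6)                     # e.g. 6 standard fetal planes

for p in backbone.parameters():
    p.requires_grad = False                # keep high-resource features fixed

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(25, 1, 64, 64)             # tiny new-center sample
y = torch.randint(0, 6, (25,))
for _ in range(10):                        # few-epoch head fine-tuning
    opt.zero_grad()
    loss = loss_fn(head(backbone(x)), y)
    loss.backward()
    opt.step()
```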
In this paper, we demonstrate a unique recipe for enhancing the effectiveness of audio machine learning approaches by fusing preprocessing techniques into a deep learning model. Our solution accelerates training and inference performance by optimizing hyperparameters through training, rather than through costly random search, to build a reliable mosquito detector from audio signals. The experiments and results presented here are part of our MOS C submission to the ACM 2022 challenge. Our results outperform the published baseline by 212% on an unpublished test set. We believe this is one of the best real-world examples of building a robust bioacoustic system that provides reliable mosquito detection in noisy conditions.
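One way to read "fusing preprocessing into the model" is to make the spectrogram front-end a layer of the network itself, so its settings live with the model rather than in an external pipeline. The sketch below, built on torchaudio's MelSpectrogram, is an assumed illustration rather than the authors' architecture:

```python
import torch
import torch.nn as nn
import torchaudio

class MosquitoDetector(nn.Module):
    """Spectrogram front-end fused into the network (illustrative only;
    the layer sizes and parameter values are assumptions)."""

    def __init__(self, sample_rate=8000):
        super().__init__()
        self.frontend = torchaudio.transforms.MelSpectrogram(
            sample_rate=sample_rate, n_fft=512, n_mels=64)
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1))                          # mosquito / no mosquito

    def forward(self, waveform):                       # (batch, samples)
        spec = self.frontend(waveform).unsqueeze(1)    # (batch, 1, mels, frames)
        return self.net(spec)

logits = MosquitoDetector()(torch.randn(2, 8000))      # one second of audio
```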
Based on data gathered by echo-sounder buoys attached to drifting fish aggregating devices (dFADs) across tropical oceans, the current study applies a machine learning protocol to examine the temporal trends of tuna schools' association with drifting objects. Using a binary output, metrics commonly used in the literature were adapted to account for the fact that the entire tuna aggregation under the dFAD was considered. The median time it took tuna to first colonize the dFADs varied between 25 and 43 days depending on the ocean, with the longest soak and colonization times registered in the Pacific Ocean. The continuous residence times of tuna schools were generally shorter than the continuous absence times (median values between 5 and 7 days, and between 9 and 11 days, respectively), in line with previous findings. Using a regression output, two novel metrics, namely aggregation time and disaggregation time, were estimated to further examine the symmetry of the aggregation process. In all oceans, the time it took for a tuna aggregation to depart from the dFAD was not significantly longer than the time it took for the aggregation to form. The value of these results is discussed in the context of the "ecological trap" hypothesis, and further analyses to enrich and exploit this data source are proposed.
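Continuous residence and absence times can be computed directly from a daily binary presence series; the helper below is an illustrative sketch of that bookkeeping, not the study's code:

```python
import numpy as np

def run_lengths(presence):
    """Median continuous residence and absence times (in days) from a
    daily binary tuna-presence series recorded at one dFAD buoy."""
    presence = np.asarray(presence)
    # indices where the series changes value mark run boundaries
    change = np.flatnonzero(np.diff(presence)) + 1
    runs = np.split(presence, change)
    residence = [len(r) for r in runs if r[0] == 1]
    absence = [len(r) for r in runs if r[0] == 0]
    return np.median(residence), np.median(absence)

days = [0, 0, 1, 1, 1, 0, 0, 0, 1, 1]   # 1 = tuna detected under the buoy
print(run_lengths(days))                 # -> (median residence, median absence)
```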
Efficiently solving optimization problems is one of the key issues in today's industry. This task relies mainly on classical algorithms, which present scalability problems and processing limitations. Quantum computing has emerged to challenge these types of problems. In this paper, we focus on the quantum Metropolis-Hastings algorithm based on quantum walks. We use this algorithm to build a quantum software tool called Quantum Metropolis Solver (QMS). We validate QMS with the N-Queens problem to show a potential quantum advantage in an example that can easily be extrapolated to the Artificial Intelligence domain. We carry out different simulations to validate the performance of QMS and its configuration.
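For intuition, the classical Metropolis process that QMS accelerates with quantum walks can be sketched on N-Queens, using the number of conflicts as the energy. This classical analogue is illustrative only and implements none of the quantum machinery:

```python
import math
import random

def conflicts(cols):
    """Energy: pairs of queens sharing a column or a diagonal."""
    n = len(cols)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if cols[i] == cols[j] or abs(cols[i] - cols[j]) == j - i)

def metropolis_nqueens(n=8, beta=2.0, steps=20000, seed=0):
    rng = random.Random(seed)
    cols = [rng.randrange(n) for _ in range(n)]   # queen column per row
    e = conflicts(cols)
    for _ in range(steps):
        row, new_col = rng.randrange(n), rng.randrange(n)
        old_col = cols[row]
        cols[row] = new_col
        e_new = conflicts(cols)
        # Metropolis rule: always accept improvements, otherwise accept
        # with probability exp(-beta * energy increase).
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            e = e_new
        else:
            cols[row] = old_col                   # reject the move
        if e == 0:
            break
    return cols, e

print(metropolis_nqueens())   # a conflict-free board has energy 0
```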